
fix(protocol): new way to calculate meta.difficulty (TKO-11) #15568

Merged
merged 4 commits into alpha-6 from new_l2_difficulty
Jan 26, 2024

Conversation

@dantaik (Contributor) commented Jan 25, 2024

[Screenshot: 2024-01-25 at 19:38]


@Brechtpd (Contributor)
What about just using `block.prevrandao`? Is there any benefit to artificially modifying the randomness per L2 when multiple blocks are submitted in the same L1 block, or to the randomness not matching the L1 value directly?

If we want to make it unique per L2, then hashing in `address(this)` would ensure this. But I don't immediately see a reason to make it more difficult (no pun intended) than just passing through `block.prevrandao` directly, though I could be missing something.
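The two alternatives above can be sketched as follows. This is an illustrative fragment, not the actual TaikoL1 code; the variable names are made up:

```solidity
// Alternative A: pass the L1 randomness through directly.
// Simple, but every rollup (and every L2 block proposed in the same
// L1 block) sees the same value.
bytes32 difficultyPassthrough = bytes32(block.prevrandao);

// Alternative B: mix in this contract's address so each rollup
// deployment gets its own value, while proposers still cannot
// influence it.
bytes32 difficultyPerRollup =
    keccak256(abi.encodePacked(block.prevrandao, address(this)));
```

Note that alternative B only differentiates between rollups, not between multiple L2 blocks proposed within the same L1 block, which is the concern raised next.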

@dantaik (Contributor, Author) commented Jan 25, 2024

> prevrandao

I like the idea.

@dantaik (Contributor, Author) commented Jan 25, 2024

Wait, I think we want to make sure each L2 block has a different difficulty; otherwise, L2 blocks proposed in the same L1 block will always have the same min-tier...

@Brechtpd (Contributor)

I think, realistically speaking, proposers will just avoid submitting blocks when it's not the cheapest tier. If a certain proof is cheap enough not to really affect their profitability, that proof should simply always be in the base tier, because that's better security. And if the proof is very expensive, it simply will not be profitable to submit a block with the fees users are used to paying for most other blocks. If a ZK proof costs $20, that's a pretty large, unpredictable spike in the fees users are expected to pay, compared to the fairly good predictability of L1 and L2 fees (at least over short time spans) thanks to EIP-1559.

So I think randomness in the proofs will result in a worse user experience: either users have to overpay all the time (bad UX because of higher fees), or proposers will simply wait one (or more) L1 blocks until the randomness is more favorable and only then submit their block when it is actually profitable (bad UX because transactions now take longer to finalize).

@adaki2004 (Contributor)

> I think, realistically speaking, proposers will just avoid submitting blocks when it's not the cheapest tier. If a certain proof is cheap enough not to really affect their profitability, that proof should simply always be in the base tier, because that's better security. And if the proof is very expensive, it simply will not be profitable to submit a block with the fees users are used to paying for most other blocks. If a ZK proof costs $20, that's a pretty large, unpredictable spike in the fees users are expected to pay, compared to the fairly good predictability of L1 and L2 fees (at least over short time spans) thanks to EIP-1559.
>
> So I think randomness in the proofs will result in a worse user experience: either users have to overpay all the time (bad UX because of higher fees), or proposers will simply wait one (or more) L1 blocks until the randomness is more favorable and only then submit their block when it is actually profitable (bad UX because transactions now take longer to finalize).

That would imply that 100% of the blocks should have 100% of the proofs (SGX, ZK, guardian) there together immediately; otherwise it makes no sense anyway, no? Because simply no one would want to propose something that costs more than "needed"? 🤔

@Brechtpd (Contributor)

I think it enforces at least some fixed configuration without randomness. For example, SGX + guardian as the base proofs: both SGX and guardian proofs are very cheap to verify, so it doesn't make much sense not to always require them, and doing so won't impact user fees in any significant way. Then SGX + guardian + ZK could still serve as a way to override the SGX + guardian tier, because ZK at the base level may be expensive (that remains to be seen, though). But that override would only be used when needed, i.e. if the blockhash is actually wrong, not randomly, so users never have to pay the ZK cost normally (though we do lower security and increase time to finality by not having ZK as part of the base proofs).

@dantaik (Contributor, Author) commented Jan 26, 2024

In the future we can try some other ideas, for example determining the minTier of a block by the next proposeBlock transaction, or by using a VRF. But for now, with minimal modification, we have the following 3 options:

  1. `meta.difficulty = block.prevrandao` - the simplest, but the value is shared by all L2 blocks proposed in the same L1 block.
  2. `meta.difficulty = keccak256(abi.encodePacked(block.prevrandao, b.numBlocks, block.number));` - the proposed change, where each L2 block gets its own pseudo-random number and proposers cannot influence the value.
  3. `meta.difficulty = meta.blobHash ^ bytes32(block.prevrandao * b.numBlocks * block.number);` - the current A6 implementation; an issue with it is that block proposers can carefully construct the blob to proactively influence the difficulty value.

Among the above 3 options, I still think option 2 is preferable to the other two.
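Side by side, the three options look like this in Solidity. This is a sketch following the snippets in the comment above, not the merged diff; `meta` and `b` are the structs referenced in the discussion:

```solidity
// Option 1: shared by all L2 blocks proposed in the same L1 block,
// since prevrandao is fixed for the duration of an L1 block.
meta.difficulty = bytes32(block.prevrandao);

// Option 2 (proposed): unique per L2 block, because b.numBlocks
// increments with every proposed block; none of the inputs are
// proposer-controlled, so the value cannot be influenced.
meta.difficulty =
    keccak256(abi.encodePacked(block.prevrandao, b.numBlocks, block.number));

// Option 3 (current A6 impl): blobHash is proposer-controlled, so a
// proposer can grind blob contents to steer the XOR result toward a
// difficulty that maps to a cheap min-tier.
meta.difficulty =
    meta.blobHash ^ bytes32(block.prevrandao * b.numBlocks * block.number);
```

Option 2 keeps every input outside the proposer's control while still differing for each L2 block, which is why it is preferred over options 1 and 3.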

@Brechtpd (Contributor)

For the difficulty value itself I think this is indeed fine, especially because contracts shouldn't depend on it much anyway. Random minimum tiers are problematic, I think, but I guess that's another discussion.

@dantaik dantaik added this pull request to the merge queue Jan 26, 2024
Merged via the queue into alpha-6 with commit 8c4b48e Jan 26, 2024
14 checks passed
@dantaik dantaik deleted the new_l2_difficulty branch January 26, 2024 13:52